Kappa Test for Agreement Between Two Raters

Abstract

Introduction

This module computes power and sample size for the test of agreement between two raters using the kappa statistic. The power calculations are based on the results of Flack, Afifi, Lachenbruch, and Schouten (1988) and apply to ratings in k categories from two raters or judges. Category frequencies can be varied within a single run of the procedure, so a wide range of scenarios can be analyzed at once. For further information about kappa analysis, see Chapter 18 of Fleiss, Levin, and Paik (2003). A sketch of the kappa statistic underlying the test is given below.
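To make the quantities concrete, here is a minimal Python sketch of the statistic the test is built on: Cohen's kappa for a k x k table of joint ratings, with the standard large-sample z-test of H0: kappa = 0 (null variance as given in Fleiss, Levin, and Paik, 2003). This is not the Flack et al. (1988) power calculation that the module implements, and the function name and table values are hypothetical.

```python
import numpy as np
from scipy.stats import norm

def kappa_z_test(table):
    """Cohen's kappa for a k x k contingency table of joint ratings
    (rows: rater A, columns: rater B), with a large-sample z-test of
    H0: kappa = 0. Null variance as in Fleiss, Levin, and Paik (2003)."""
    t = np.asarray(table, dtype=float)
    n = t.sum()
    p = t / n                                  # joint proportions
    row, col = p.sum(axis=1), p.sum(axis=0)    # marginal proportions
    p_o = np.trace(p)                          # observed agreement
    p_e = row @ col                            # agreement expected by chance
    kappa = (p_o - p_e) / (1.0 - p_e)
    # Large-sample variance of the kappa estimate under H0: kappa = 0
    var0 = (p_e + p_e**2 - np.sum(row * col * (row + col))) / (n * (1.0 - p_e)**2)
    z = kappa / np.sqrt(var0)
    return kappa, z, 2.0 * norm.sf(abs(z))

# Hypothetical example: two raters classify 100 subjects into 3 categories
ratings = [[30,  5,  2],
           [ 4, 25,  6],
           [ 1,  7, 20]]
kappa, z, p_value = kappa_z_test(ratings)
print(f"kappa = {kappa:.3f}, z = {z:.2f}, p = {p_value:.4g}")
```

Under the hypothetical table, observed agreement is p_o = 0.75 against chance agreement p_e of about 0.34, giving kappa of about 0.62 and a clearly significant z statistic.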

Similar Articles

Functional Movement Screen in Elite Boy Basketball Players: A Reliability Study

Purpose: To investigate the reliability of the Functional Movement Screen (FMS) in basketball players. Few studies have compared the reliability of the FMS between raters with different levels of experience in athletes. The purpose of this study was to compare FMS scoring between beginner and expert raters using video records. Methods: This is a cross-sectional study. The study subjects compris...

Agreement Between an Isolated Rater and a Group of Raters

The agreement between two raters judging items on a categorical scale is traditionally measured by Cohen's kappa coefficient. We introduce a new coefficient for quantifying the degree of agreement between an isolated rater and a group of raters on a nominal or ordinal scale. The coefficient, which is defined on a population-based model, requires a specific definition of the co...

Effect of standardized training on the reliability of the Cochrane risk of bias assessment tool: a prospective study

BACKGROUND The Cochrane risk of bias tool is commonly criticized for having a low reliability. We aimed to investigate whether training of raters, with objective and standardized instructions on how to assess risk of bias, can improve the reliability of the Cochrane risk of bias tool. METHODS In this pilot study, four raters inexperienced in risk of bias assessment were randomly allocated to ...

Reliability of the modified Rankin Scale across multiple raters: benefits of a structured interview.

BACKGROUND AND PURPOSE The modified Rankin Scale (mRS) is widely used to assess global outcome after stroke. The aim of the study was to examine rater variability in assessing functional outcomes using the conventional mRS, and to investigate whether use of a structured interview (mRS-SI) reduced this variability. METHODS Inter-rater agreement was studied among raters from 3 stroke centers. F...

Kappa — A Critical Review

The Kappa coefficient is widely used in assessing categorical agreement between two raters or two methods. It can also be extended to more than two raters (methods). When using Kappa, the shortcomings of this coefficient should not be neglected. Bias and prevalence effects lead to paradoxes of Kappa. These problems can be avoided by using some other indexes together, but the solutions of the Ka...
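As a small numeric illustration of the prevalence paradox this abstract alludes to, the following Python sketch (with hypothetical counts) computes Cohen's kappa alongside PABAK, the prevalence-adjusted bias-adjusted kappa of Byrt, Bishop, and Carlin (1993), one of the complementary indexes sometimes reported with kappa.

```python
import numpy as np

def kappa_and_pabak(table):
    """Cohen's kappa plus PABAK, the prevalence-adjusted,
    bias-adjusted kappa: (k * p_o - 1) / (k - 1) for k categories."""
    p = np.asarray(table, dtype=float)
    p /= p.sum()
    k = p.shape[0]
    p_o = np.trace(p)                        # observed agreement
    p_e = p.sum(axis=1) @ p.sum(axis=0)      # agreement expected by chance
    return (p_o - p_e) / (1.0 - p_e), (k * p_o - 1.0) / (k - 1.0)

# Hypothetical skewed 2x2 table: 90% raw agreement, but almost every
# subject falls in the first category, so chance agreement is very high.
skewed = [[90, 5],
          [ 5, 0]]
kappa, pabak = kappa_and_pabak(skewed)
print(f"kappa = {kappa:.3f}, PABAK = {pabak:.3f}")
```

With 90% raw agreement but nearly all subjects in one category, chance agreement reaches about 0.905, so kappa comes out slightly negative (about -0.05) while PABAK, which tracks raw agreement directly, is 0.80.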

Publication date: 2015